Recently, the solution of time-dependent differential equations with neural networks has received much attention. The core idea is to learn from data the laws governing the evolution of the solution, where the data may be polluted by random noise. In contrast to other machine learning applications, however, a lot is usually known about the system at hand. For example, for many dynamical systems, physical quantities such as energy or (angular) momentum are exactly conserved. A neural network therefore has to learn these conservation laws from data, and they will only be satisfied approximately due to finite training time and random noise. In this paper, we present an alternative approach that uses Noether's theorem to incorporate conservation laws inherently into the architecture of the neural network. We demonstrate that this leads to better predictions for three model systems: the motion of a non-relativistic particle in a three-dimensional Newtonian gravitational potential, the motion of a massive relativistic particle in the Schwarzschild metric, and a system of two interacting particles in four dimensions.
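A minimal sketch of the general idea, not the paper's exact Noether-based construction: a Hamiltonian network whose generated dynamics conserve the learned energy H(q, p) by construction, so the conservation law lives in the architecture rather than having to be learned from data.

```python
# Minimal sketch (not the paper's exact construction): a Hamiltonian
# neural network; trajectories integrated from Hamilton's equations
# conserve the learned H by construction.
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    def __init__(self, dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(          # H(q, p) -> scalar energy
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, p):
        return self.net(torch.cat([q, p], dim=-1))

    def time_derivatives(self, q, p):
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        H = self.forward(q, p).sum()
        dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        return dHdp, -dHdq
```

Integrating these derivatives with a symplectic integrator keeps the learned energy (approximately) constant along trajectories, whereas an unconstrained network would have to recover that invariance from noisy data.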
To operate in human environments, a robot's semantic perception has to overcome open-world challenges such as novel objects and domain gaps. Autonomous deployment in such environments therefore requires the robot to update its knowledge and learn without supervision. We study how a robot can autonomously discover novel semantic classes and improve the accuracy of known classes while exploring an unknown environment. To this end, we develop a general framework for mapping and clustering, which we then use to generate a self-supervised learning signal for updating a semantic segmentation model. In particular, we show how clustering parameters can be optimized during deployment and that fusing multiple observation modalities improves novel object discovery compared to prior work.
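An illustrative sketch with hypothetical names (the paper's actual pipeline, parameter optimization, and fusion steps are more involved): per-point map features are clustered, and each cluster becomes a candidate (possibly novel) pseudo-class for self-supervised segmentation updates.

```python
# Illustrative sketch (function and parameter names are hypothetical):
# cluster per-point map features so each cluster becomes a candidate
# semantic pseudo-class for self-supervised segmentation updates.
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_labels(features: np.ndarray, eps: float = 0.5, min_samples: int = 20):
    """features: (n_points, d) embeddings aggregated in the map.
    Returns per-point cluster ids; DBSCAN noise points get -1 and are
    excluded from the training signal."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
```

In this reading, in-deployment optimization would tune parameters such as eps against an internal quality criterion, and multi-modal fusion would enrich the features being clustered.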
Reliable tracking algorithms are essential for automated driving. However, existing consistency measures are not sufficient to meet the growing safety demands of the automotive sector. This work therefore proposes a novel method for the online self-assessment of single-object tracking in clutter based on Kalman filtering and subjective logic. A key feature of the method is that it also provides a measure of the statistical evidence collected in the online reliability score. In this way, individual aspects of reliability, such as the correctness of the assumed measurement noise, detection probability, and clutter rate, can be monitored in addition to an overall assessment based on the available evidence. We present the mathematical derivation of the reference distribution used in the self-assessment module for the problem under investigation. Furthermore, we introduce a formula describing how to choose the threshold for the degree of conflict, the subjective-logic comparison measure used for the reliability decision. Our method is evaluated in challenging simulated scenarios designed to model adverse weather conditions. The simulations show that our method can significantly improve the reliability checking of single-object tracking in clutter in multiple respects.
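A hedged sketch of the subjective-logic ingredient mentioned above: the degree of conflict between two binomial opinions, following one common definition from the subjective-logic literature; the paper's exact measure and threshold derivation may differ.

```python
# Hedged sketch: degree of conflict between two binomial subjective
# opinions (b, d, u, a) = (belief, disbelief, uncertainty, base rate),
# following one common definition from the subjective-logic literature;
# the paper's exact measure and threshold choice may differ.
def projected_probability(b, u, a):
    return b + a * u

def degree_of_conflict(op1, op2):
    b1, d1, u1, a1 = op1
    b2, d2, u2, a2 = op2
    pd = abs(projected_probability(b1, u1, a1)
             - projected_probability(b2, u2, a2))  # projected distance
    cc = (1.0 - u1) * (1.0 - u2)                   # conjunctive certainty
    return pd * cc
```

A reliability monitor would then raise a flag once the conflict between the reference opinion and the online-estimated opinion exceeds the chosen threshold.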
Unlike classical linear models, nonlinear generative models have been addressed only sparsely in the statistical learning literature. This work aims to draw attention to these models and their secrecy potential. To this end, we invoke the replica method to derive the asymptotic normalized cross-entropy in an inverse problem whose generative model is described by a Gaussian random field with a generic covariance function. Our derivation further demonstrates the asymptotic statistical decoupling of the Bayesian estimator and specifies the decoupled setting for a given nonlinear model. The replica solution reveals that strictly nonlinear models establish an all-or-nothing phase transition: there exists a critical load at which optimal Bayesian inference changes from perfect learning to uncorrelated learning. Based on this finding, we design a new secure coding scheme that achieves the secrecy capacity of the wiretap channel. This interesting result implies that strictly nonlinear generative models are perfectly secure without any secure coding. We justify this latter statement by analyzing an illustrative model for perfectly secure and reliable inference.
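Schematically, the all-or-nothing behavior described above amounts to a jump in an error-type order parameter at a critical load; the precise order parameter and the direction of the transition follow from the paper's replica solution, so the following is only an illustrative sketch:

```latex
% Illustrative only: schematic form of an all-or-nothing phase transition.
\[
\mathrm{mmse}(\alpha) \longrightarrow
\begin{cases}
0 & \text{(perfect learning)}\\
\operatorname{Var}[x] & \text{(uncorrelated learning)}
\end{cases}
\qquad \text{with a discontinuous jump at a critical load } \alpha_c .
\]
```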
Explainable AI transforms opaque decision strategies of ML models into explanations that are interpretable by the user, for example, identifying the contribution of each input feature to the prediction at hand. Such explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by finding relevant subspaces in activation space that can be mapped to more abstract human-understandable concepts and enable a joint attribution on concepts and input features. To automatically extract the desired representation, we propose new subspace analysis formulations that extend the principle of PCA and subspace analysis to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), optimize relevance of projected activations rather than the more traditional variance or kurtosis. This enables a much stronger focus on subspaces that are truly relevant for the prediction and the explanation, in particular, ignoring activations or concepts to which the prediction model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. Our proposed methods prove to be practically useful and compare favorably to the state of the art, as demonstrated on benchmarks and three use cases.
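A minimal sketch of the PRCA idea as stated above, assuming activations a and relevance-carrying context vectors c obtained from an attribution backward pass; the paper's exact objective and constraints may differ. The point is that the second-moment matrix carries relevance instead of raw variance.

```python
# Minimal sketch of the PRCA idea (assumed formulation, may differ from
# the paper): like PCA, but eigendecompose a relevance second-moment
# matrix built from activations a (n, d) and context vectors c (n, d).
import numpy as np

def prca(a: np.ndarray, c: np.ndarray, k: int = 2) -> np.ndarray:
    m = a.T @ c                 # relevance second moment instead of covariance
    m = 0.5 * (m + m.T)         # symmetrize before the eigendecomposition
    vals, vecs = np.linalg.eigh(m)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:k]]   # basis of the most relevant k-dim subspace
```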
Kernel machines have sustained continuous progress in the field of quantum chemistry. In particular, they have proven to be successful in the low-data regime of force field reconstruction. This is because many physical invariances and symmetries can be incorporated into the kernel function to compensate for much larger datasets. So far, the scalability of this approach has however been hindered by its cubic runtime in the number of training points. While it is known that iterative Krylov subspace solvers can overcome these burdens, they crucially rely on effective preconditioners, which are elusive in practice. Practical preconditioners need to be computationally efficient and numerically robust at the same time. Here, we consider the broad class of Nyström-type methods to construct preconditioners based on successively more sophisticated low-rank approximations of the original kernel matrix, each of which provides a different set of computational trade-offs. All considered methods estimate the relevant subspace spanned by the kernel matrix columns using different strategies to identify a representative set of inducing points. Our comprehensive study covers the full spectrum of approaches, starting from naive random sampling to leverage score estimates and incomplete Cholesky factorizations, up to exact SVD decompositions.
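A hedged sketch of the simplest strategy mentioned above: a Nyström preconditioner built from randomly chosen inducing points, applied through the Woodbury identity so that each application costs O(nm) rather than O(n^2).

```python
# Hedged sketch: Nystrom preconditioner for the regularized system
# (K + lam*I)x = y, using a random subset idx of inducing points.
import numpy as np
from scipy.linalg import cholesky, solve_triangular, cho_factor, cho_solve

def make_nystrom_preconditioner(K, idx, lam, jitter=1e-8):
    """Return v -> P^{-1} v for P = K[:,idx] K[idx,idx]^{-1} K[idx,:] + lam*I."""
    m = len(idx)
    Kmm = K[np.ix_(idx, idx)] + jitter * np.eye(m)
    L = cholesky(Kmm, lower=True)
    U = solve_triangular(L, K[:, idx].T, lower=True).T   # U U^T = rank-m approx
    inner = cho_factor(lam * np.eye(m) + U.T @ U)
    def apply(v):
        # Woodbury identity: (U U^T + lam*I)^{-1} v
        return (v - U @ cho_solve(inner, U.T @ v)) / lam
    return apply
```

Wrapped in a scipy.sparse.linalg.LinearOperator, apply can serve as the M argument of a conjugate-gradient solve; the leverage-score and incomplete-Cholesky variants discussed above differ only in how idx is chosen.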
Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of spiking neural networks within PyTorch, including support for automatic differentiation in a fully automated hardware-experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset, employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
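For illustration only, and explicitly not hxtorch.snn's actual API: the standard surrogate-gradient trick that makes the hard spike threshold trainable with PyTorch autograd, the ingredient behind the gradient-based training mentioned above.

```python
# Generic surrogate-gradient illustration (not hxtorch.snn's API): a hard
# threshold in the forward pass, a smooth stand-in derivative in backward.
import torch

class SuperSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, alpha=10.0):
        ctx.save_for_backward(v)
        ctx.alpha = alpha
        return (v > 0).float()                    # Heaviside spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # fast-sigmoid-style surrogate derivative
        surrogate = 1.0 / (ctx.alpha * v.abs() + 1.0) ** 2
        return grad_output * surrogate, None

spike = SuperSpike.apply   # usage: s = spike(membrane_voltage - threshold)
```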
This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was collocated with the 34th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, encourage the standardization of tool interfaces, and bring together the neural network verification community. To this end, standardized formats for networks (ONNX) and specifications (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2022 iteration, 11 teams participated on a diverse set of 12 scored benchmarks. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this iteration of the competition.
We apply the vision transformer, a deep machine learning model built around the attention mechanism, to mel-spectrogram representations of raw audio recordings. When adding mel-based data augmentation techniques and sample weighting, we achieve comparable performance on both tasks of ComParE21 (the PRS and CCS challenges), outperforming most single-model baselines. We further introduce overlapping vertical patching and evaluate the influence of parameter configurations. Index Terms: audio classification, attention, mel-spectrogram, unbalanced data-sets, computational paralinguistics
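A hedged sketch of the described input pipeline, with illustrative patch width and stride: raw audio is converted to a log-mel spectrogram and cut into overlapping full-height "vertical" patches for the vision transformer.

```python
# Hedged sketch (patch_w and stride are illustrative assumptions):
# log-mel spectrogram cut into overlapping vertical patches.
import numpy as np
import librosa

def mel_patches(path, n_mels=128, patch_w=16, stride=8):
    y, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)              # (n_mels, n_frames)
    # stride < patch_w yields the overlapping vertical patching
    return np.stack([logmel[:, i:i + patch_w]
                     for i in range(0, logmel.shape[1] - patch_w + 1, stride)])
```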
The intersection of ground reaction forces in a small, point-like area above the center of mass has been observed in computer simulation models and human walking experiments. This intersection point is often called a virtual pivot point (VPP). With the VPP observed so ubiquitously, it is commonly assumed to provide postural stability for bipedal walking. In this study, we challenge this assumption by questioning if walking without a VPP is possible. Deriving gaits with a neuromuscular reflex model through multi-stage optimization, we found stable walking patterns that show no signs of the VPP-typical intersection of ground reaction forces. We, therefore, conclude that a VPP is not necessary for upright, stable walking. The non-VPP gaits found are stable and successfully rejected step-down perturbations, which indicates that a VPP is not primarily responsible for locomotion robustness or postural stability. However, a collision-based analysis indicates that non-VPP gaits increased the potential for collisions between the vectors of the center of mass velocity and ground reaction forces during walking, suggesting an increased mechanical cost of transport. Although our computer simulation results have yet to be confirmed through experimental studies, they already strongly challenge the existing explanation of the VPP's function and provide an alternative explanation.
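A hedged sketch of how such an intersection point is commonly quantified (the paper's estimator may differ): the virtual pivot point as the least-squares intersection of the ground reaction force lines of action over a gait cycle.

```python
# Hedged sketch: VPP as the least-squares intersection of GRF lines of
# action; the paper's estimator may differ.
import numpy as np

def virtual_pivot_point(cops, forces):
    """cops: (n, 2) centers of pressure; forces: (n, 2) sagittal-plane GRF
    vectors. Returns the point minimizing squared distance to all lines."""
    d = forces / np.linalg.norm(forces, axis=1, keepdims=True)
    A, b = np.zeros((2, 2)), np.zeros(2)
    for p, u in zip(cops, d):
        P = np.eye(2) - np.outer(u, u)   # projector orthogonal to line direction
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

In these terms, a non-VPP gait would show a large residual of this fit rather than a tight, point-like intersection above the center of mass.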